Ab Initio, Big Data, Informatica, Tableau, Data Architect, Cognos, MicroStrategy, Healthcare Business Analysts, Cloud, etc.
at Exusia



We are seeking a skilled and innovative Developer with strong expertise in Scala, Java/Python and Spark/Hadoop to join our dynamic team.
Key Responsibilities:
• Design, develop, and maintain robust and scalable backend systems using Scala, Spark, and Hadoop, drawing on expertise in Python/Java.
• Build and deploy highly efficient, modular, and maintainable microservices architecture for enterprise-level applications.
• Write and optimize algorithms to enhance application performance and scalability.
Required Skills:
• Programming: Expert in Scala and object-oriented programming.
• Frameworks: Hands-on experience with Spark and Hadoop (a brief sketch follows this posting).
• Databases: Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
• Location: Mumbai
• Employment Type: Full-time
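
As a rough illustration of the kind of Spark work this role describes, here is a minimal batch-aggregation sketch. It uses PySpark rather than Scala, and the paths and column names (events, user_id, event_ts) are hypothetical, not taken from the posting:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

    # Read raw events from HDFS (hypothetical path)
    events = spark.read.parquet("hdfs:///data/raw/events")

    # Aggregate events per user per day; column names are illustrative
    daily = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("user_id", "event_date")
        .agg(F.count("*").alias("event_count"))
    )

    # Write back partitioned by date for downstream consumers
    (daily.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("hdfs:///data/curated/daily_user_events"))

    spark.stop()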
Power BI Developer (Azure Developer)
Job Description:
Senior visualization engineer with an understanding of Azure Data Factory and Databricks, to develop and deliver solutions that bring information to audiences in support of key business processes.
Ensure code and design quality through execution of test plans, and assist in developing standards and guidelines while working closely with internal and external design, business, and technical counterparts.
Desired Competencies:
- Strong grasp of data visualization design concepts centered on the business user, and a knack for communicating insights visually.
- Ability to produce any of the available charting methods with drill-down options and action-based reporting, including choosing the right graphs for the underlying data and applying company themes and objects.
- Publishing reports and dashboards to the reporting server and providing role-based access to users.
- Ability to create wireframes in any tool to communicate the reporting design.
- Creation of ad-hoc reports and dashboards to visually communicate data hub metrics (metadata information) for top management's understanding.
- Should be able to handle huge volumes of data from databases such as SQL Server, Synapse, Delta Lake, or flat files and create high-performance dashboards (a brief sketch follows this list).
- Should be strong in Power BI development.
- Expertise in two or more BI (visualization) tools for building reports and dashboards.
- Understanding of Azure components such as Azure Data Factory, Data Lake Store, SQL Database, and Azure Databricks.
- Strong knowledge of SQL queries.
- Must have worked in full life-cycle development from functional design to deployment.
- Intermediate ability to format, process, and transform data.
- Working knowledge of Git and SVN.
- Good experience establishing connections with heterogeneous sources such as Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, various databases, etc.
- Basic understanding of data modelling and ability to combine data from multiple sources to create integrated reports
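
As a minimal illustration of the data-preparation work implied above (reading a large Delta Lake table and pre-shaping it so a dashboard stays fast), here is a hedged PySpark sketch. The table path, column names, and output table are hypothetical, and a Databricks/Spark environment with the Delta Lake connector is assumed:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dashboard-prep").getOrCreate()

    # Hypothetical Delta table of sales transactions
    sales = spark.read.format("delta").load("/mnt/datalake/sales")

    # Pre-aggregate to monthly grain so the dashboard queries a small, fast table
    monthly = (
        sales
        .withColumn("month", F.date_trunc("month", "sale_ts"))
        .groupBy("month", "region")
        .agg(
            F.sum("amount").alias("revenue"),
            F.countDistinct("customer_id").alias("customers"),
        )
    )

    # Persist as a summary table for the reporting layer to query
    (monthly.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("analytics.monthly_sales_summary"))

Pre-aggregating in the lake, rather than pointing the dashboard at raw transactions, is one common way to meet the high-performance dashboard requirement.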
Preferred Qualifications:
- Bachelor's degree in Computer Science or Technology
- Proven success in contributing to a team-oriented environment
Responsibilities:
· Analyze complex data sets to answer specific questions using MMIT's market access data, Norstella claims data, and third-party claims data (IQVIA LAAD, Symphony SHA). Applicants must have hands-on experience working with these specific data sets.
· Deliver consultative services to clients related to MMIT RWD sets
· Produce complex analytical reports using data visualization tools such as Power BI or Tableau
· Define customized technical specifications to surface MMIT RWD in MMIT tools.
· Execute work in a timely fashion with high accuracy, while managing various competing priorities; Perform thorough troubleshooting and execute QA; Communicate with internal teams to obtain required data
· Ensure adherence to documentation requirements, process workflows, timelines, and escalation protocols
· Other duties as assigned.
Requirements:
· Bachelor’s Degree or relevant experience required
· 2-5 years of professional experience in RWD analytics using SQL
· Fundamental understanding of the pharma and market access space
· Strong analysis skills and proficiency with tools such as Tableau or Power BI
· Excellent written and verbal communication skills.
· Analytical, critical thinking and creative problem-solving skills.
· Relationship building skills.
· Solid organizational skills including attention to detail and multitasking skills.
· Excellent time management and prioritization skills.

- Own the product analytics of Bidgely's end-user-facing products; measure and identify areas of improvement through data
- Liaise with Product Managers and Business Leaders to understand product issues and priorities, and support them with relevant product analytics
- Own the automation of product analytics through strong SQL knowledge
- Develop early warning metrics for production and highlight issues and breakdowns for resolution
- Resolve client escalations and concerns regarding key business metrics
- Define and own execution
- Own Energy Efficiency program designs, dashboard development, and monitoring of existing Energy Efficiency programs
- Deliver data-backed analysis and statistically proven solutions
- Research and implement best practices
- Mentor team of analysts
Qualifications and Education Requirements
- B.Tech from a premier institute with 5+ years analytics experience or Full-time MBA from a premier b-school with 3+ years of experience in analytics/business or product analytics
- Bachelor's degree in Business, Computer Science, Computer Information Systems, Engineering, Mathematics, or other business/analytical disciplines
Skills needed to excel
- Proven analytical and quantitative skills and an ability to use data and metrics to back up assumptions, develop business cases, and complete root cause analyses
- Excellent understanding of retention, churn, and acquisition of the user base
- Ability to employ statistics and anomaly detection techniques for data-driven analytics
- Ability to put yourself in the shoes of the end customer and understand what “product excellence” means
- Ability to rethink existing products and use analytics to identify new features and product improvements.
- Ability to rethink existing processes and design new processes for more effective analyses
- Strong SQL knowledge, working experience with Looker and Tableau a great plus
- Strong commitment to quality visible in the thoroughness of analysis and techniques employed
- Strong project management and leadership skills
- Excellent communication (oral and written) and interpersonal skills and an ability to effectively communicate with both business and technical teams
- Ability to coach and mentor analysts on technical and analytical skills
- Good knowledge of statistics, basic machine learning, and A/B testing is preferable (a brief sketch follows this list)
- Experience as a growth hacker and/or in product analytics is a big plus
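
Since the list above calls out statistics and A/B testing, here is a minimal sketch of how a product analyst might check whether a variant's conversion rate differs significantly from control. The counts are invented for illustration, and the statsmodels package is assumed:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical results: conversions and visitors for control vs. variant
    conversions = [420, 480]      # control, variant
    visitors = [10000, 10000]

    # Two-sided z-test for a difference in conversion rates
    z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

    print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Difference is statistically significant at the 5% level")
    else:
        print("No significant difference detected")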
- Produce clean code and automated tests
- Align with enterprise architecture frameworks and standards
- Be the role model for all engineers in the team in terms of technical competency
- Research, assess and adopt new technologies as required
- Be a guide and mentor to the team members and help in ramping up the overall skill-base of the team.
- Produce detailed estimates and optimized work plans for requirements and changes
- Ensure that features are delivered on time and that they meet the business needs
- Strive for quality of performance, usability, reliability, maintainability, and extensibility
- Identify opportunities for process and tool improvements
- Use analytical rigor to produce effective solutions to poorly defined problems
- Follow the build-to-ship mantra in practice with full DevOps implementation
- 10+ years of core software development and product creation experience in CPaaS.
- Working knowledge of VoIP, communication APIs, J2EE, JMS/Kafka, web services, Hadoop, React, Node.js, and GoLang.
- Working knowledge of various CPaaS channels: SMS, voice, WhatsApp, RCS, and email.
- Working knowledge of DevOps, automation testing, test-driven development, behavior-driven development, and serverless or microservices architectures
- Experience with AWS / Azure deployments
- Solid background in large scale software development.
- Full stack understanding of web/mobile/API/database development concepts and patterns
- Exposure to microservices, IaaS, PaaS, service mesh, SaaS, and cloud-native application development.
- Understanding of Agile Scrum and SDLC principles.
- Containerization and orchestration: Docker, Kubernetes, OpenShift, Consul, etc.
- Knowledge of NFV (OpenStack, vSphere, vCloud, etc.)
- Experience in Data Analytics/AI/ML or Marketing Tech domain is an added advantage

Must have experience in the BFSI domain.
Experience: Minimum 3 years
Location: Pune
Mandatory skills: Power BI/Tableau, SQL, basic Python
Company Description
At Bungee Tech, we help retailers and brands meet customers everywhere and on every occasion they are in. We believe that accurate, high-quality data, matched with compelling market insights, empowers retailers and brands to keep their customers at the center of all the innovation and value they deliver.
We provide retailers and brands with a clear and complete omnichannel picture of their competitive landscape. We collect billions of data points every day, multiple times a day, from publicly available sources. Using high-quality extraction, we uncover detailed information on products or services, which we automatically match and then proactively track for price, promotion, and availability. Anything we do not match helps to identify a new assortment opportunity.
Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
You will also be responsible for integrating them with the architecture used in the company.
We're working on the future. If you are seeking an environment where you can drive innovation, apply state-of-the-art software technologies to solve real-world problems, and enjoy the satisfaction of providing visible benefit to end users in an iterative, fast-paced environment, this is your opportunity.
Responsibilities
As an experienced member of the team, in this role, you will:
- Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development
- Research, design, code, troubleshoot, and support; what you create is also what you own.
- Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.
- Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.
BASIC QUALIFICATIONS
- Bachelor’s degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
- 5+ years of relevant professional experience in Data Engineering and Business Intelligence
- 5+ years with advanced SQL (analytical functions), ETL, and data warehousing (a brief sketch of analytical functions follows these qualifications)
- Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ ELT and reporting/analytic tools and environments, data structures, data modeling and performance tuning.
- Ability to effectively communicate with both business and technical teams.
- Excellent coding skills in Java, Python, C++, or equivalent object-oriented programming language
- Understanding of relational and non-relational databases and basic SQL
- Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script
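
As a small illustration of the analytical (window) functions called out above, here is a sketch of a windowed SQL query run through PySpark. The orders data is fabricated for the example; the same SQL would apply to a real warehouse table:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-window-demo").getOrCreate()

    # Tiny in-memory stand-in for a real orders table
    orders = spark.createDataFrame(
        [("alice", "2024-01-01", 120.0),
         ("alice", "2024-01-05", 80.0),
         ("bob", "2024-01-02", 200.0)],
        ["customer", "order_date", "amount"],
    )
    orders.createOrReplaceTempView("orders")

    # Analytical (window) functions: running total and order rank per customer
    spark.sql("""
        SELECT customer,
               order_date,
               amount,
               SUM(amount) OVER (PARTITION BY customer ORDER BY order_date) AS running_total,
               ROW_NUMBER() OVER (PARTITION BY customer ORDER BY order_date) AS order_rank
        FROM orders
    """).show()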
PREFERRED QUALIFICATIONS
- Experience with building data pipelines from application databases.
- Experience with AWS services: S3, Redshift, Spectrum, EMR, Glue, Athena, ELK, etc.
- Experience working with Data Lakes.
- Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
- Sharp problem-solving skills and ability to resolve ambiguous requirements
- Experience working with Big Data
- Knowledge of and experience working with Hive and the Hadoop ecosystem
- Knowledge of Spark
- Experience working with Data Science teams



We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.
Responsibilities
- Working with Big Data tools and frameworks to provide requested capabilities
- Identifying development needs in order to improve and streamline operations
- Developing and managing BI solutions
- Implementing ETL processes and data warehousing
- Monitoring performance and managing infrastructure
Skills
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming (a brief sketch follows this list)
- Good knowledge of data querying tools: SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with at least one of Python, Java, or Scala
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB or MongoDB is an advantage
- Excellent written and verbal communication skills
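
Because the list above names Kafka and Spark Streaming, here is a minimal Structured Streaming sketch in PySpark. The broker address and topic name are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

    # Subscribe to a Kafka topic (hypothetical broker and topic)
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    # Kafka delivers raw bytes; cast the payload and count events per minute
    counts = (
        events
        .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
        .groupBy(F.window("timestamp", "1 minute"))
        .count()
    )

    # Print running counts; a real job would write to a durable sink instead
    query = (
        counts.writeStream
        .outputMode("complete")
        .format("console")
        .start()
    )
    query.awaitTermination()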

